by John Bownds, Michael Ebersole, David Lovelock and Daniel O'Connor
Copyright ©1994.
Section I
The authors will examine several aspects of the search management function and search theory in the following pages. Topics include: the Mattson Consensus, search area segmentation, optimizing resources in a search, search terminology, how to treat clues, and information about the CASIE search software. Before launching into the first topic, let's define, or perhaps re-define, some commonly used and confused search terminology. We'll discuss the uses and abuses of these terms later.

POA technically means Probability of Area. While used in many different ways, POA usually answers the question, "What is the likelihood that the subject is in this particular search segment?" POA changes for each segment after any portion of the total search area is searched.

POD, according to strict search theory, means Probability of Detection. As you'll see later, while the term "POD" can have several meanings, it often refers to a measure of a search team's effectiveness after coming in from a period of searching: "How well has this segment been searched?" PODs can be applied to resources or segments.

POS is an abbreviation for Probability of Success, and is the product of a search segment's POA and POD (POS = POA × POD). As discussed later, it is not always clear what this number has to do with success, especially if the subject has yet to be found.

A Mattson Consensus establishes initial POAs for a new search, allowing search managers to deploy resources as they see fit. The Mattson Consensus is an averaging of opinions, and is actually an educated guess (based on subject behavior, lost person characteristics, and the experience, knowledge and hunches of the search experts) of where the subject is most likely to be.

ROW stands for Rest of the World and refers to all of the region outside the designated search area.

Recall how POAs and PODs should be used in typical large-scale searches. Based on the current POAs, resources are deployed into the various search segments.
If the subject is found, the search function stops. However, if the subject is not found, the resources are evaluated for how well they searched their assigned segments, producing PODs. These PODs and the current POAs are then used to generate updated POAs. These new POAs become the current POAs, and the entire sequence of events is repeated until either the subject is found or the probability that the subject is in the ROW is so large that the search is suspended. Be aware, however, that this methodology assumes the subject is stationary, remaining in the same search segment throughout the search. If the subject is mobile, the computed values can still be useful to the search manager, but the underlying search theory is not yet clearly defined.
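The update cycle described above can be sketched in a few lines. The paper does not give the update rule explicitly, so the formula below is the standard Bayesian redistribution used in search theory (each segment's POA is scaled by the chance the subject was missed there, then all values are renormalized); the segment counts and numbers are purely illustrative.

```python
def update_poas(poas, pods):
    """Redistribute POAs after an unsuccessful operational period.

    poas: current POA for each segment (last entry is the ROW).
    pods: POD achieved in each segment this period (the ROW is never
          searched, so its POD is 0).
    """
    # Chance the subject is in each segment AND was missed there.
    missed = [poa * (1.0 - pod) for poa, pod in zip(poas, pods)]
    # Renormalize so the updated POAs again total 1.00 (100%).
    total = sum(missed)
    return [m / total for m in missed]

# Hypothetical search: three segments plus the ROW.
poas = [0.10, 0.40, 0.45, 0.05]
pods = [0.0, 0.6, 0.0, 0.0]   # only segment 2 was searched this period
new = update_poas(poas, pods)
# Segment 2's POA drops; all other POAs, including the ROW's, rise.
```

Repeating this after every operational period, and suspending when the ROW's entry grows large, reproduces the cycle of events the text describes.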
When an agency or organization makes the decision to conduct a full-scale search, the Incident Commander or Search Manager must immediately do two things before resources are deployed: (1) divide the search area into manageable segments, and (2) coordinate the establishment of a Mattson Consensus. Later in this paper we will present some thoughts on search segmentation, but right now let's turn our attention to the Mattson Consensus.

Establishing a Mattson Consensus requires the input of a team of "experts", personnel knowledgeable in search emergencies and the local terrain. Each expert must assign a numeric probability to every search segment, estimating the relative chance that the lost subject is in that segment. One additional segment, the Rest of the World (ROW), must also be rated. Each evaluator's percentages for all search segments plus the ROW must total 1.00 (or 100%). The average of all values assigned to each segment constitutes the so-called Mattson Consensus. This represents the best guess about where the subject might be found, based on the experience and subjective "hunches" of a team of local area experts. These judgments combine knowledge of the area with knowledge of lost person behavioral characteristics. The Mattson Consensus is simply an averaging of these experts' opinions: a starting estimate of the subject's chances of being in each search segment.

This Probability of Area (POA) for each segment determines where search and rescue resources will initially be deployed. While POA is not a true probability but a subjective estimate, it is still a critical step in the evolution of a search. Besides its primary role of getting the search started by creating a hierarchy of POAs for the entire search area, the Mattson Consensus serves an additional, more subtle role. Each time a segment is searched, the current POA is updated, based on the varying efficiencies of the search resources.
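The averaging step itself is simple and can be sketched as follows. The expert count, segment count, and all numeric values below are hypothetical; the only requirements from the text are that each evaluator's POAs (including the ROW) total 1.00 and that the consensus is the segment-by-segment average.

```python
def mattson_consensus(evaluations):
    """Average a list of per-expert POA vectors into a consensus vector."""
    for ev in evaluations:
        # Each expert's percentages, including the ROW, must total 100%.
        assert abs(sum(ev) - 1.0) < 1e-6, "an expert's POAs must total 1.00"
    n = len(evaluations)
    return [sum(ev[i] for ev in evaluations) / n
            for i in range(len(evaluations[0]))]

# Three hypothetical experts rating segments 1-3 plus the ROW (last entry):
experts = [
    [0.10, 0.40, 0.45, 0.05],
    [0.20, 0.30, 0.40, 0.10],
    [0.15, 0.35, 0.45, 0.05],
]
consensus = mattson_consensus(experts)   # segment-by-segment averages
```

Because each expert's vector sums to 1.00, the consensus vector does too, so it can be used directly as the initial POA distribution.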
The Mattson Consensus calculated at the start of the search provides the weighting variables for updating or redistributing POAs for each segment throughout the entire search. Clearly, the Mattson Consensus is a critical component of the search management function. It establishes the initial distribution of POAs in the search area, and repeatedly affects the redistribution of these POAs after resources are deployed (and Probabilities of Detection or PODs for each operational period become available). To be effective, the Mattson Consensus should be an unbiased reflection of the search evaluators' judgment. However, four major potential sources of bias have been discovered in actual and simulated calculations of the Mattson Consensus: (1) with many segments, an evaluator's POAs may fail to total 100%; (2) evaluators may assign "unlikely" segments a POA of zero; (3) numerical subjectivity, the differing meanings evaluators attach to the numbers themselves; and (4) a remainder bias in allocating percentages across the last few segments.
We'd like to expand on the first three problems in more detail.

For training purposes, a Mattson Consensus is often simulated with four or five search segments. In this case, it is fairly easy to assign POAs that add up to 100%. In real searches, however, there may be 20 or more segments that need to be evaluated. The greater the number of segments, the greater the potential for a wrong total.

Another problem arises when strong clues are present at the beginning of a search. Large perceived differences between segment POAs have been shown to result in a curious phenomenon: some evaluators become so convinced that the subject is in a particular segment that they rate one or more of the other segments with a POA of zero. If it is physically possible for the subject to reach such "unlikely" segments, then no matter how strong the clues in the more likely segments, some nonzero chance that the subject is in each such segment should be assigned. A zero POA means there is absolutely no chance the subject is in the segment, and can we ever be sure of that? If a segment truly has zero probability, it should have been part of the ROW rather than the search area.

Numerical subjectivity can bias results because of the preconceived notions each evaluator has about the relative worth of numbers. An accountant may expect a number to be accurate to two decimal places, while a trained statistician sees numbers as "indicators" surrounded by an error band in which the real value falls. In the subjective context of a Mattson Consensus, the actual numeric value assigned to a segment is less important than what the number means relative to the values of the other segments. For example, suppose we have a search with three segments. After a Mattson Consensus, the initial POAs are 10%, 40%, and 45% for segments 1, 2, and 3, respectively. The ROW equals 5%.
The segment with the POA of 10% has the lowest search priority, and, based on the experts' opinions, is less than one-fourth as likely to contain the subject as the highest-rated segment. In an 18-segment search area, however, suppose 17 segments had initial POAs of 5%, one had a POA of 10%, and the ROW was 5%. In this case, the segment with a POA of 10% obviously has the highest search priority. The experts who participated in this Mattson Consensus are indicating that they believe the subject is twice as likely to be in that particular segment as in any other segment. In a Mattson Consensus, one's feeling about what a particular number "means" is irrelevant except in how that number relates to the values assigned to other segments. An effective Mattson Consensus will emphasize the relative differences across segments and avoid hair-splitting exercises in assigning numeric probabilities. A mathematically sophisticated search manager realizes that two POAs, one 21% and the other 19%, may be virtually identical, since they are based on a range of hunches, experience and numeric subjectivity among the evaluators. Only as the gap between the two POAs grows can the search manager increase his confidence that there is a real, substantive difference between the two.

Finally, a remainder bias has occasionally been noted by the authors, in real and simulated search scenarios, when the search area contains a large number of segments. For example, an evaluator faced with 20 segments can easily mismanage the distribution of POA percentages by giving disproportionately high numeric values to the early segments. By the time the 18th, 19th and 20th segments need to be evaluated, there may not be enough of the starting POA left to allocate. That is, not enough of the original 100% remains to allow the evaluator to indicate how he really feels.
Given this situation, an evaluator must go back and reduce the value of some segments to allow enough of a percentage for the last few, while ensuring that the total equals 1 or 100%. The temptation here is to shave percentages off the closest segments, e.g., the 16th and 17th, rather than go back and (as should be done) reallocate percentages across all 20 segments. Conversely, an evaluator may have a large amount of the original 100% left by the time he or she arrives at the final segments. The temptation here is to dump this remainder onto these segments, giving them a search priority higher than intended. In either case, having too little or too much remaining POA to allocate across the last few search segments can bias the Mattson Consensus away from the true averages that should have been computed. To circumvent the above problems, O'Connor has suggested an alternative to the standard Mattson Consensus, based on a scale of relative values. Instead of assigning a numerical value to each segment and the ROW, the expert specifies a letter corresponding to the likelihood that the subject is in a particular segment. Letters are assigned according to the scheme in Table 1.
A numerical value is then associated with each letter, and a POA arrived at based on a simple algorithm. This scheme has some obvious and not so obvious advantages:
How does one arrive at these POAs? The relative method used to reach a Mattson Consensus is based on nine choices, which have been refined to ensure uniformity across the entire scale. Each letter an expert uses (from Table 1) is assigned a numerical value according to the scheme in Table 2.
Next, the expert's total is obtained, and the ratio of the expert's numerically assigned value to the expert's total is that expert's POA for that segment.
A simplified example demonstrates how this rating system works. Imagine (in a search with 2 segments and the ROW) that an expert assigns values as follows:
The lowest letter selected was G, so we use the third line of Table 2. The expert's total will be 9 (1 + 7 + 1 or G + A + G). This individual expert's POA for Segment 1 is 1/9, for Segment 2 is 7/9, and for ROW is 1/9. These can be easily converted to the more useful percentages of 11%, 78%, and 11% (rounded). This expert's values are then averaged with the other experts, and a Mattson Consensus emerges.
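The conversion from one expert's letters to POAs can be sketched as below. Since Table 2 is not reproduced here, the letter-to-value mapping is an assumption covering only the letters this example needs (on the line of Table 2 where G is the lowest letter used, G = 1 and A = 7, per the worked example above).

```python
# Assumed values for the Table 2 line where G is the lowest letter used.
# The full table, and the rule for selecting the correct line, would
# come from Table 2 itself.
LETTER_VALUES = {"A": 7, "G": 1}

def relative_poas(letters):
    """Convert one expert's letter ratings into POA fractions.

    Each letter is mapped to its numeric value; the expert's POA for a
    segment is that value divided by the expert's total, so the
    fractions automatically sum to 1.00.
    """
    values = [LETTER_VALUES[letter] for letter in letters]
    total = sum(values)
    return [v / total for v in values]

# Segment 1 = G, Segment 2 = A, ROW = G, as in the example above:
poas = relative_poas(["G", "A", "G"])
# poas == [1/9, 7/9, 1/9], i.e. roughly 11%, 78%, and 11%
```

Averaging these per-expert fractions across all experts then yields the Mattson Consensus exactly as in the traditional numerical method.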
Table 2 has been carefully designed so that choices grouped at the top or bottom of the scale maintain the same relative value. An expert does not always have to use an A. For example, consider a second expert who assigns values as follows:
The lowest letter was H, so we use the second line of Table 2 to find a total of 9 (1 + 7 + 1), with exactly the same POA fractions (1/9, 7/9, and 1/9) as the previous expert. Note that the same relative scale has been maintained even though this second set of choices dropped by one level of likelihood.

O'Connor's relative method and the traditional numerical method of arriving at a Mattson Consensus have been tested side by side several times since Fall 1989. These experiments occurred at Cape Cod National Seashore, Massachusetts, and Grand Canyon National Park, Arizona. At Cape Cod, members of the National Park Service (NPS) staff reviewed common scenarios that simulated searches in various parts of the park. At Grand Canyon, NPS and Sheriff's Office search personnel, experienced in the Mattson method, were given old search scenarios that contained sketchy subject information typical of the first day of a search. These individuals then gave their initial POAs using both the traditional numerical method and the new relative method. The initial POAs arrived at under either method were so close in value that search resources would have been deployed in virtually the same manner. While larger and smaller scales were tested, the scale of nine values in Table 1 proved excellent at mimicking the numerical choices of a traditional Mattson Consensus. Based on the positive feedback we've received, we may soon see the day when the "relative method" becomes the standard for arriving at a Mattson Consensus.
We all know that at the start of a search the search area is defined and should then be segmented. The resulting segments form the basis for developing a Mattson Consensus. What follows are four thoughts on this segmentation process: